health issue
MAHA Wants Action on Pesticides. It's Not Going to Get It From Trump's Corporate-Friendly EPA
The White House's new Make America Healthy Again strategy makes some asks of the EPA--but critics say the agency is too industry-friendly to make a difference. When Jean-Marie Kauth first read the Make America Healthy Again commission report, released by the White House in May, she was "thrilled about some of the things they identified," she says. "They clearly called out industry as a pernicious influence on why EPA has not been very successful in regulating chemicals, especially pesticides." Kauth's daughter died of leukemia at age 8 after, Kauth says, she was exposed to the insecticide chlorpyrifos, which the EPA banned in 2021. Kauth, a professor at Benedictine University in Illinois, now serves as a member of the EPA's Children's Health Protection Advisory Committee (CHPAC), a group of outside experts who advise the agency on children's health issues.
- North America > United States > Illinois (0.24)
- North America > United States > California (0.04)
- North America > Mexico (0.04)
- (2 more...)
- Law (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Food & Agriculture > Agriculture (1.00)
Street-Level AI: Are Large Language Models Ready for Real-World Judgments?
Pokharel, Gaurab, Farabi, Shafkat, Fowler, Patrick J., Das, Sanmay
A surge of recent work explores the ethical and societal implications of large-scale AI models that make "moral" judgments. Much of this literature focuses either on alignment with human judgments through various thought experiments or on the group fairness implications of AI judgments. However, the most immediate and likely use of AI is to assist or fully replace so-called street-level bureaucrats: the individuals who decide how to allocate scarce social resources or whether to approve benefits. A rich literature on principles of local justice examines how societies settle on prioritization mechanisms in such domains. In this paper, we examine how well LLM judgments align with human judgments, as well as with socially and politically determined vulnerability scoring systems currently used in the domain of homelessness resource allocation. Crucially, we use real data on those needing services (maintaining strict confidentiality by using only locally run large models) to perform our analyses. We find that LLM prioritizations are extremely inconsistent in several ways: internally on different runs, between different LLMs, and between LLMs and the vulnerability scoring systems. At the same time, LLMs demonstrate qualitative consistency with lay human judgments in pairwise testing.
- Asia > Singapore (0.04)
- North America > United States > Virginia (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
- Research Report > Experimental Study (0.69)
- Research Report > New Finding (0.46)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Consumer Health (1.00)
- (3 more...)
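The abstract's consistency checks can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' code: `query_llm` stands in for a locally hosted model (the paper keeps client data confidential by running models locally), and `vuln_score` stands in for an existing vulnerability scoring system.

```python
from collections import Counter

def prioritize(case_a, case_b, query_llm, runs=5):
    """Ask the model repeatedly which of two cases to prioritize."""
    prompt = (f"Which applicant should receive housing assistance first? "
              f"Answer A or B.\nA: {case_a}\nB: {case_b}")
    votes = Counter(query_llm(prompt).strip() for _ in range(runs))
    choice, count = votes.most_common(1)[0]
    return choice, count / runs  # majority answer and run-to-run agreement

def agreement_with_scores(pairs, query_llm, vuln_score):
    """Fraction of pairs where the LLM majority matches the scoring system."""
    hits = sum(
        prioritize(a, b, query_llm)[0]
        == ("A" if vuln_score(a) >= vuln_score(b) else "B")
        for a, b in pairs
    )
    return hits / len(pairs)
```

Repeating the query surfaces the run-to-run instability the paper reports; comparing the majority answer against the scoring system captures the LLM-versus-policy disagreement.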
AI tools used by English councils downplay women's health issues, study finds
Artificial intelligence tools used by more than half of England's councils are downplaying women's physical and mental health issues and risk creating gender bias in care decisions, research has found. The study found that when using Google's AI tool "Gemma" to generate and summarise the same case notes, language such as "disabled", "unable" and "complex" appeared significantly more often in descriptions of men than women. The study, by the London School of Economics and Political Science (LSE), also found that similar care needs in women were more likely to be omitted or described in less serious terms. Dr Sam Rickman, the lead author of the report and a researcher in LSE's Care Policy and Evaluation Centre, said AI could result in "unequal care provision for women". "We know these models are being used very widely and what's concerning is that we found very meaningful differences between measures of bias in different models," he said.
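The study's design lends itself to a compact sketch: summarize the same case notes under swapped genders and compare how often severity-laden terms appear. Everything below is illustrative; `summarize` would wrap a locally run Gemma model, and the pronoun swap is deliberately crude.

```python
import re
from collections import Counter

TERMS = ["disabled", "unable", "complex"]  # terms the study tracked

SWAPS = {"Mr": "Ms", "He": "She", "he": "she", "his": "her", "him": "her"}

def swap_gender(note):
    """Crude male-to-female rewrite, for illustration only."""
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, lambda m: SWAPS[m.group(1)], note)

def term_counts(notes, summarize):
    """Count tracked terms in summaries of original vs. gender-swapped notes."""
    counts = {"male": Counter(), "female": Counter()}
    for note in notes:
        for label, text in (("male", note), ("female", swap_gender(note))):
            summary = summarize(text).lower()
            counts[label].update(t for t in TERMS if t in summary)
    return counts  # a large male/female gap signals the bias the study found
```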
YOLOv8-Based Deep Learning Model for Automated Poultry Disease Detection and Health Monitoring
Sabbella, Akhil Saketh Reddy, Prachothan, Ch. Lakshmi, Panta, Eswar Kumar
In the poultry industry, detecting chicken illnesses is essential to avoid financial losses. Conventional techniques depend on manual observation, which is laborious and prone to mistakes. This study proposes an AI-based approach built on YOLOv8, a deep learning model for real-time object recognition: the system analyzes high-resolution chicken photos, and YOLOv8 detects signs of illness such as abnormalities in behavior and appearance. The model is trained on a sizable annotated dataset, providing accurate real-time identification of infected chickens and immediate warnings to farm operators so they can act quickly. By facilitating early infection identification, eliminating the need for human inspection, and enhancing biosecurity in large-scale farms, this AI technology improves chicken health management. The real-time capabilities of YOLOv8 offer a scalable and effective method for improving farm management practices.
- Asia > India > Tamil Nadu > Chennai (0.05)
- North America > United States > Indiana (0.04)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Consumer Health (1.00)
- Food & Agriculture > Agriculture (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.94)
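A minimal inference sketch using the Ultralytics YOLOv8 API is below. The weights file `poultry_best.pt` and the class label `healthy` are assumptions; the paper does not publish its label set, so a real deployment would use whatever classes its annotated dataset defines.

```python
from ultralytics import YOLO

# Hypothetical checkpoint trained on the paper's annotated chicken images.
model = YOLO("poultry_best.pt")

def flag_sick_birds(image_path, conf_threshold=0.5):
    """Return detections whose predicted class suggests illness."""
    results = model(image_path)[0]
    alerts = []
    for box in results.boxes:
        label = model.names[int(box.cls)]
        confidence = float(box.conf)
        if label != "healthy" and confidence >= conf_threshold:
            alerts.append((label, confidence, box.xyxy.tolist()))
    return alerts  # e.g., forward to the farm operator's alert channel
```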
Large Language Models' Varying Accuracy in Recognizing Risk-Promoting and Health-Supporting Sentiments in Public Health Discourse: The Cases of HPV Vaccination and Heated Tobacco Products
Kim, Soojong, Kim, Kwanho, Kim, Hye Min
Machine learning methods are increasingly applied to analyze health-related public discourse based on large-scale data, but questions remain regarding their ability to accurately detect different types of health sentiments. In particular, Large Language Models (LLMs) have gained attention as a powerful technology, yet their accuracy and feasibility in capturing different opinions and perspectives on health issues are largely unexplored. This research therefore examines how accurately three prominent LLMs (GPT, Gemini, and LLAMA) detect risk-promoting versus health-supporting sentiments across two critical public health topics: Human Papillomavirus (HPV) vaccination and heated tobacco products (HTPs). Drawing on data from Facebook and Twitter, we curated multiple sets of messages supporting or opposing recommended health behaviors, supplemented with human annotations as the gold standard for sentiment classification. The findings indicate that all three LLMs generally demonstrate substantial accuracy in classifying risk-promoting and health-supporting sentiments, although notable discrepancies emerge by platform, health issue, and model type. Specifically, models often show higher accuracy for risk-promoting sentiment on Facebook, whereas health-supporting messages are more accurately detected on Twitter. An additional analysis also shows the challenges LLMs face in reliably detecting neutral messages. These results highlight the importance of carefully selecting and validating language models for public health analyses, particularly given potential biases in training data that may lead LLMs to overestimate or underestimate the prevalence of certain perspectives.
- North America > United States > California > Yolo County > Davis (0.28)
- Oceania > Australia (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (7 more...)
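A hedged sketch of the evaluation loop the abstract implies: prompt a model to label each message, then score agreement with the human gold standard per platform and topic. `ask_llm` is a placeholder for a call to GPT, Gemini, or LLAMA.

```python
from collections import defaultdict

PROMPT = ("Label this message about {topic} as 'risk-promoting', "
          "'health-supporting', or 'neutral':\n{text}")

def evaluate(messages, ask_llm):
    """messages: dicts with 'text', 'topic', 'platform', and gold 'label'."""
    correct, total = defaultdict(int), defaultdict(int)
    for m in messages:
        pred = ask_llm(PROMPT.format(topic=m["topic"], text=m["text"]))
        key = (m["platform"], m["topic"])  # accuracy varies along both axes
        total[key] += 1
        correct[key] += pred.strip().lower() == m["label"]
    return {k: correct[k] / total[k] for k in total}
```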
Decoding Linguistic Nuances in Mental Health Text Classification Using Expressive Narrative Stories
Tang, Jinwen, Guo, Qiming, Zhao, Yunxin, Shang, Yi
Recent advancements in NLP have spurred significant interest in analyzing social media text data to identify linguistic features indicative of mental health issues. However, the domain of Expressive Narrative Stories (ENS), deeply personal and emotionally charged narratives that offer rich psychological insights, remains underexplored. This study bridges this gap by utilizing a dataset sourced from Reddit, focusing on ENS from individuals with and without self-declared depression. Our research evaluates the utility of advanced language models, BERT and MentalBERT, against traditional models. We find that traditional models are sensitive to the absence of explicit topic-related words, which may limit their ability to generalize to ENS that lack clear mental health terminology. Although MentalBERT is designed to better handle psychiatric contexts, it demonstrated a dependency on specific topic words for classification accuracy, raising concerns about its application when explicit mental health terms are sparse (P-value<0.05). In contrast, BERT exhibited minimal sensitivity to the absence of topic words in ENS, suggesting a superior capability to understand deeper linguistic features and making it more effective for real-world applications. Both BERT and MentalBERT also recognize linguistic nuances and largely maintain classification accuracy even when narrative order is disrupted, although sentence shuffling still has statistically significant impacts on model performance (P-value<0.05), most evident in ENS comparisons between individuals with and without mental health declarations. These findings underscore the importance of exploring ENS for deeper insights into mental health-related narratives, advocating for a nuanced approach to mental health text analysis that moves beyond mere keyword detection.
- North America > United States > Missouri > Boone County > Columbia (0.14)
- South America > Brazil (0.04)
- Oceania > Australia (0.04)
- North America > United States > Texas > Nueces County > Corpus Christi (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
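The sentence-shuffling probe can be sketched as follows. The checkpoint name is a stand-in (the paper fine-tuned BERT and MentalBERT itself), and a base model's classification head is untrained, so this only illustrates the mechanics.

```python
import random
from transformers import pipeline

# Stand-in checkpoint; the study would load its own fine-tuned classifier.
clf = pipeline("text-classification", model="bert-base-uncased")

def shuffle_sentences(text, seed=0):
    """Destroy narrative order while keeping the vocabulary intact."""
    sentences = [s for s in text.split(". ") if s]
    random.Random(seed).shuffle(sentences)
    return ". ".join(sentences)

def order_sensitivity(narrative):
    """Drop in classifier confidence when sentence order is destroyed."""
    base = clf(narrative)[0]["score"]
    shuffled = clf(shuffle_sentences(narrative))[0]["score"]
    return base - shuffled  # a large drop implies order-dependent features
```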
"It's a conversation, not a quiz": A Risk Taxonomy and Reflection Tool for LLM Adoption in Public Health
Zhou, Jiawei, Chen, Amy Z., Shah, Darshi, Reese, Laura Schwab, De Choudhury, Munmun
Recent breakthroughs in large language models (LLMs) have generated both interest and concern about their potential adoption as accessible information sources or communication tools across different domains. In public health -- where stakes are high and impacts extend across populations -- adopting LLMs poses unique challenges that require thorough evaluation. However, structured approaches for assessing potential risks in public health remain under-explored. To address this gap, we conducted focus groups with health professionals and with people who have lived experience of the health issues at hand, unpacking their concerns across three distinct and critical public health issues that demand high-quality information: vaccines, opioid use disorder, and intimate partner violence. We synthesize participants' perspectives into a risk taxonomy, distinguishing and contextualizing the potential harms LLMs may introduce when positioned alongside traditional health communication. The taxonomy spans four dimensions of risk: individual behaviors, human-centered care, the information ecosystem, and technology accountability. For each dimension, we discuss specific risks and example reflection questions to help practitioners adopt a risk-reflexive approach. This work offers a shared vocabulary and reflection tool for experts in both computing and public health to collaboratively anticipate, evaluate, and mitigate risks when deciding whether to employ LLM capabilities and how to reduce harm when they are used.
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > Texas (0.04)
- (6 more...)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Health & Medicine > Public Health (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- (2 more...)
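The four risk dimensions named in the abstract could be carried into practice as a small lookup structure. The dimensions are the paper's; the reflection questions below are illustrative placeholders, not the paper's actual items.

```python
# Dimensions from the abstract; the questions are invented placeholders.
RISK_TAXONOMY = {
    "individual behaviors": "Could the LLM's answer change what a person does next?",
    "human-centered care": "Does the tool displace contact with a trained provider?",
    "information ecosystem": "Could outputs crowd out vetted health sources?",
    "technology accountability": "Who is answerable if an output causes harm?",
}

def reflect(dimension: str) -> str:
    """Return the reflection prompt for one risk dimension."""
    return RISK_TAXONOMY[dimension]
```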
Revealed: The 10 countries that produce the most plastic pollution around the world - with India topping the list
While sorting your plastic recycling might be frustrating, scientists warn that a lack of waste collection could be deadly for millions around the world. Scientists from the University of Leeds have used AI modelling to reveal the 10 countries responsible for the most plastic pollution. Overall, the researchers calculate that 52 million tonnes of uncollected plastic waste entered the environment in 2020, creating a serious health risk for those exposed. India topped the table as the biggest producer of plastic pollution - creating 9.3 million tonnes of waste in a single year - followed by Nigeria and Indonesia. Lead author Dr Costas Velis says: 'This is an urgent global human health issue -- an ongoing crisis: people whose waste is not collected have no option but to dump or burn it.'
- Asia > Indonesia (0.28)
- Europe > United Kingdom (0.06)
- Africa > Sub-Saharan Africa (0.06)
- (5 more...)
Unveiling and Mitigating Bias in Mental Health Analysis with Large Language Models
Wang, Yuqing, Zhao, Yun, Keller, Sara Alessandra, de Hond, Anne, van Buchem, Marieke M., Pillai, Malvika, Hernandez-Boussard, Tina
The advancement of large language models (LLMs) has demonstrated strong capabilities across various applications, including mental health analysis. However, existing studies have focused on predictive performance, leaving the critical issue of fairness underexplored and posing significant risks to vulnerable populations. Despite acknowledging potential biases, previous works have lacked thorough investigations into these biases and their impacts. To address this gap, we systematically evaluate biases across seven social factors (e.g., gender, age, religion) using ten LLMs with different prompting methods on eight diverse mental health datasets. Our results show that GPT-4 achieves the best overall balance of performance and fairness among the LLMs, although it still lags behind domain-specific models such as MentalRoBERTa in some cases. Additionally, our tailored fairness-aware prompts can effectively mitigate bias in mental health predictions, highlighting their potential to support fairer analysis in this field.
- Europe > United Kingdom (0.04)
- North America > Canada (0.04)
- Africa > Nigeria (0.04)
- (15 more...)
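The fairness-aware prompting idea can be sketched as a wrapper around the base task prompt. `ask_llm` is a placeholder for any of the ten models evaluated, and the prefix wording is an assumption; the paper's actual prompts may differ.

```python
# Assumed instruction; the paper's tailored fairness-aware prompts may differ.
FAIRNESS_PREFIX = (
    "Assess the text on clinical content alone; do not let gender, age, "
    "religion, or other social attributes influence your judgment.\n"
)

def predict(text, ask_llm, fairness_aware=True):
    """Binary mental health prediction, optionally with a fairness instruction."""
    prompt = "Does this post indicate depression? Answer yes or no.\n" + text
    if fairness_aware:
        prompt = FAIRNESS_PREFIX + prompt
    return ask_llm(prompt).strip().lower()
```

Bias can then be estimated by comparing prediction rates across demographic subgroups with and without the prefix.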
I'm Dying to Have a Threesome With Two Men. Why Does Every Attempt Fall Apart in the Same Way?
How to Do It is Slate's sex advice column. Send it to Jessica and Rich here. I'm (35F) very interested in having a threesome and have been working the apps to try to find the right person to help make this happen. I've had a few bites. I was sexting with one guy for days on end about our joint fantasy of making this happen, and I found a second guy, who said he'd like to join us.